@… I feel like this is how it will be. We will all have multiple accounts, and the only ones using it cross-platform will be us, following each other on each account multiple times...because it's opt-in and nobody knows what it is they're opting into....🤦♂️
On a Conjecture Concerning the Roots of Ehrhart Polynomials of Symmetric Edge Polytopes from Complete Multipartite Graphs
Max Kölbl
https://arxiv.org/abs/2404.02136
ISW: Kremlin has yet to signal its response following Transnistria's appeal for 'protection': https://benborges.xyz/2024/02/29/isw-kremlin-has.html
This https://arxiv.org/abs/2403.11893 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_qu…
Bifurcations and explicit unfoldings of grazing loops connecting one high multiplicity tangent point
Zhihao Fang, Xingwu Chen
https://arxiv.org/abs/2404.19455
Do Large Language Models Understand Conversational Implicature -- A Case Study with a Chinese Sitcom
Shisen Yue, Siyuan Song, Xinyuan Cheng, Hai Hu
https://arxiv.org/abs/2404.19509 https://arxiv.org/pdf/2404.19509
arXiv:2404.19509v1 Announce Type: new
Abstract: Understanding the non-literal meaning of an utterance is critical for large language models (LLMs) to become human-like social communicators. In this work, we introduce SwordsmanImp, the first Chinese multi-turn-dialogue-based dataset aimed at conversational implicature, sourced from dialogues in the Chinese sitcom $\textit{My Own Swordsman}$. It includes 200 carefully handcrafted questions, all annotated with which Gricean maxims have been violated. We test eight closed-source and open-source LLMs on two tasks: a multiple-choice question task and an implicature explanation task. Our results show that GPT-4 attains human-level accuracy (94%) on the multiple-choice questions. CausalLM follows with 78.5% accuracy. Other models, including GPT-3.5 and several open-source models, achieve lower accuracies ranging from 20% to 60% on the multiple-choice questions. Human raters were asked to rate the explanations of the implicatures generated by the LLMs on their reasonability, logic and fluency. While all models generate largely fluent and self-consistent text, their explanations score low on reasonability except for GPT-4's, suggesting that most LLMs cannot produce satisfactory explanations of the implicatures in the conversation. Moreover, we find that the LLMs' performance does not vary significantly across Gricean maxims, suggesting that LLMs do not process implicatures derived from different maxims differently. Our data and code are available at https://github.com/sjtu-compling/llm-pragmatics.
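The multiple-choice evaluation the abstract reports (e.g. GPT-4's 94% accuracy) reduces to comparing model-chosen options against gold labels. A minimal sketch of that scoring step, with illustrative variable names that are assumptions rather than taken from the SwordsmanImp codebase:

```python
# Hypothetical sketch of multiple-choice accuracy scoring, as described
# in the abstract. The label format ("A"/"B"/"C"/"D") is an assumption
# for illustration, not taken from the actual dataset.

def multiple_choice_accuracy(gold, pred):
    """Fraction of questions where the model picked the gold option."""
    if not gold:
        return 0.0
    correct = sum(1 for g, p in zip(gold, pred) if g == p)
    return correct / len(gold)

# Toy example: the model matches 3 of 4 gold answers.
gold_labels = ["A", "C", "B", "D"]
model_choices = ["A", "C", "B", "A"]
print(multiple_choice_accuracy(gold_labels, model_choices))  # 0.75
```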
@… There are multiple distinct ways in which I miss the “good old days”:
1. The feeling of the web not being “serious”, in the sense that nothing in the world seemed to truly *depend* on the web. You could hang out with friends, turn in homework, do banking (although I wasn’t old enough for that to matter to me yet), all without the web. The web was an *addi…
Military: Russian drone flew across Moldovan border: https://benborges.xyz/2024/02/27/military-russian-drone.html
This https://arxiv.org/abs/2208.02833 has been replaced.
link: https://scholar.google.com/scholar?q=a